Deep Landscape Forecasting for Real-time Bidding Advertising
The emergence of real-time auctions in online advertising has drawn
substantial attention to modeling the market competition, i.e., bid landscape
forecasting. The problem is formulated as forecasting the probability
distribution of the market price for each ad auction. Considering the
censorship issue caused by the second-price auction mechanism, many researchers
have devoted their efforts to bid landscape forecasting by incorporating
survival analysis from the medical research field. However, most existing
solutions focus on either counting-based statistics over segmented sample
clusters or learning a parameterized model based on heuristic assumptions
about the distribution form. Moreover, none of them considers the sequential
patterns of the features over the price space. To capture more sophisticated
yet flexible patterns at a fine-grained level of the data, we propose a Deep
Landscape
Forecasting (DLF) model which combines deep learning for probability
distribution forecasting and survival analysis for censorship handling.
Specifically, we utilize a recurrent neural network to flexibly model the
conditional winning probability with respect to each bid price. We then
conduct bid landscape forecasting through the probability chain rule with
strict mathematical derivations, and optimize the model end-to-end by
minimizing two negative likelihood losses with comprehensive motivations.
Without any specific assumption about the distribution form of the bid
landscape, our model shows
great advantages over previous works on fitting various sophisticated market
price distributions. In the experiments over two large-scale real-world
datasets, our model significantly outperforms the state-of-the-art solutions
under various metrics.
Comment: KDD 2019. The reproducible code and dataset link is
https://github.com/rk2900/DL
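The chain-rule construction described above can be made concrete with a small sketch. This is not the paper's implementation: the conditional winning probabilities below are random numbers standing in for RNN outputs, and the bid levels are an invented discretization. Given h[l] = P(market price = l | price >= l), the market price distribution and the winning probability for any bid follow by telescoping products.

```python
import numpy as np

# Hedged sketch of the probability chain rule: h[l] stands in for the
# RNN-predicted conditional probability that the market price equals
# level l, given that it is at least l.
rng = np.random.default_rng(0)
h = rng.uniform(0.05, 0.3, size=20)

# Probability that the market price reaches level l unresolved:
# S[l] = prod_{j < l} (1 - h[j]).
survival = np.concatenate(([1.0], np.cumprod(1.0 - h)))[:-1]

# Market price distribution via the chain rule: p[l] = h[l] * S[l].
pmf = h * survival

# Winning probability for a bid at level b: W(b) = sum_{l <= b} p[l].
win_prob = np.cumsum(pmf)

assert np.all(np.diff(win_prob) >= 0)  # W is non-decreasing in the bid
assert pmf.sum() <= 1.0 + 1e-9         # mass beyond the top level is censored
```

The probability mass not covered by the enumerated levels corresponds to censored (losing) observations, which is exactly where the survival-analysis losses come into play.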
Product-based Neural Networks for User Response Prediction
Predicting user responses, such as clicks and conversions, is of great
importance and is widely used in many Web applications, including
recommender systems, web search, and online advertising. The data in those
applications is mostly categorical and contains multiple fields; a typical
representation is to transform it into a high-dimensional sparse binary feature
representation via one-hot encoding. Faced with such extreme sparsity,
traditional models are limited in their capacity to mine shallow patterns
from the data, i.e., low-order feature combinations. Deep models such as deep
neural networks, on the other hand, cannot be directly applied to the
high-dimensional input because of the huge feature space. In this paper, we
propose Product-based Neural Networks (PNN) with an embedding layer to learn
a distributed representation of the categorical data, a product layer to
capture interactive patterns between inter-field categories, and further fully
connected layers to explore high-order feature interactions. Our experimental
results on two large-scale real-world ad click datasets demonstrate that PNNs
consistently outperform the state-of-the-art models on various metrics.
Comment: 6 pages, 5 figures, ICDM201
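The product-layer idea in the abstract can be sketched numerically. This is an illustrative toy, not the authors' code: the field count, embedding size, and random embeddings are invented, and only the inner-product variant of the layer is shown. Each field's one-hot vector is mapped to a dense embedding, pairwise inner products between field embeddings capture inter-field interactions, and both signals feed the fully connected layers.

```python
import numpy as np
from itertools import combinations

# Toy inner-product layer: one embedding row per categorical field
# (in a real model these come from an embedding lookup per field value).
num_fields, embed_dim = 4, 8
rng = np.random.default_rng(1)
field_embeddings = rng.normal(size=(num_fields, embed_dim))

# Pairwise inner products between all distinct field pairs capture
# the inter-field interactive patterns.
pairwise = np.array([
    field_embeddings[i] @ field_embeddings[j]
    for i, j in combinations(range(num_fields), 2)
])

# The linear (first-order) signal and the product signal are concatenated
# before the fully connected layers explore higher-order interactions.
linear_part = field_embeddings.reshape(-1)
product_layer_input = np.concatenate([linear_part, pairwise])
```

With 4 fields there are 4*3/2 = 6 pairwise products, so the layer emits a vector of length 4*8 + 6 = 38 here.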
Real-Time Bidding by Reinforcement Learning in Display Advertising
The majority of online display ads are served through real-time bidding
(RTB): each ad impression is auctioned off in real time at the moment it is
generated by a user visit. To place an ad automatically and optimally, it is
critical for advertisers to devise a learning algorithm that bids cleverly
for each ad impression in real time. Most previous works consider the bid
decision as
a static optimization problem of either treating the value of each impression
independently or setting a bid price to each segment of ad volume. However, the
bidding for a given ad campaign would repeatedly happen during its life span
before the budget runs out. As such, each bid is strategically correlated by
the constrained budget and the overall effectiveness of the campaign (e.g., the
rewards from generated clicks), which is only observed after the campaign has
completed. Thus, it is of great interest to devise an optimal bidding strategy
sequentially so that the campaign budget can be dynamically allocated across
all the available impressions on the basis of both the immediate and future
rewards. In this paper, we formulate the bid decision process as a
reinforcement learning problem, where the state space is represented by the
auction information and the campaign's real-time parameters, while an action is
the bid price to set. By modeling the state transition via auction competition,
we build a Markov Decision Process framework for learning the optimal bidding
policy to optimize the advertising performance in the dynamic real-time bidding
environment. Furthermore, the scalability problem from the large real-world
auction volume and campaign budget is well handled by state value approximation
using neural networks.
Comment: WSDM 201
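The MDP formulation described above can be illustrated with a tabular toy before any neural approximation. This is a simplification, not the paper's method: the state is reduced to (auctions remaining, budget remaining), the market price distribution and click value are invented, and the second-price payment rule is applied deterministically. Dynamic programming over this small state space yields the optimal bid at each state.

```python
import numpy as np

# Invented market price distribution over integer price levels 1..5,
# and an assumed expected reward for each won impression.
prices = np.arange(1, 6)
price_probs = np.array([0.3, 0.25, 0.2, 0.15, 0.1])
click_value = 1.0

T, B = 10, 20                 # auctions remaining, budget remaining
V = np.zeros((T + 1, B + 1))  # V[t, b]: optimal value-to-go

for t in range(1, T + 1):
    for b in range(B + 1):
        best = -np.inf
        for bid in range(0, b + 1):          # cannot bid beyond the budget
            q = 0.0
            for price, p in zip(prices, price_probs):
                if bid >= price:             # win: pay the market price
                    q += p * (click_value + V[t - 1, b - price])
                else:                        # lose: budget is preserved
                    q += p * V[t - 1, b]
            best = max(best, q)
        V[t, b] = best
```

The value function is non-decreasing in both remaining budget and remaining auctions, which is the structure the paper exploits when replacing this exhaustive table with neural state-value approximation at real-world scale.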